LSTM Language Model¶
class openspeech.models.lstm_lm.model.LSTMLanguageModel(configs: omegaconf.dictconfig.DictConfig, tokenizer: openspeech.tokenizers.tokenizer.Tokenizer)[source]¶
LSTM language model. Paper: http://www-i6.informatik.rwth-aachen.de/publications/download/820/Sundermeyer-2012.pdf

Parameters:
- configs (DictConfig) – configuration set. 
- tokenizer (Tokenizer) – tokenizer in charge of preparing the inputs for the model.
 
Inputs:
- inputs (torch.FloatTensor): An input sequence passed to the model. Typically this will be a padded FloatTensor of size (batch, seq_length, dimension).
 
 
Returns:
- Result of model predictions.

Return type:
- outputs (dict)
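To make the interface concrete, here is a minimal stand-in written in plain PyTorch. It is not the actual Openspeech class: the name TinyLSTMLanguageModel, the vocabulary size, and the use of integer token ids as input are illustrative assumptions; only the default hyperparameters (3 LSTM layers, hidden dim 512, dropout 0.3) come from the configuration documented below.

```python
import torch
import torch.nn as nn


class TinyLSTMLanguageModel(nn.Module):
    """Illustrative sketch, not openspeech.models.lstm_lm.model.LSTMLanguageModel."""

    def __init__(self, vocab_size=100, hidden_state_dim=512, num_layers=3, dropout_p=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_state_dim)
        self.rnn = nn.LSTM(
            hidden_state_dim,
            hidden_state_dim,
            num_layers=num_layers,
            dropout=dropout_p,
            batch_first=True,
        )
        self.fc = nn.Linear(hidden_state_dim, vocab_size)

    def forward(self, inputs):
        # inputs: (batch, seq_length) token ids -> per-step log-probabilities
        hidden_states, _ = self.rnn(self.embedding(inputs))
        logits = self.fc(hidden_states)
        # Return a dict, mirroring the documented `outputs (dict)` return type
        return {"logits": logits.log_softmax(dim=-1)}


model = TinyLSTMLanguageModel()
tokens = torch.randint(0, 100, (4, 16))  # (batch, seq_length)
out = model(tokens)["logits"]            # (batch, seq_length, vocab_size)
```

The dict-shaped return mirrors the documented interface; the real class additionally consumes a DictConfig and a Tokenizer at construction time.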
 
LSTM Language Model Configuration¶
class openspeech.models.lstm_lm.configurations.LSTMLanguageModelConfigs(model_name: str = 'lstm_lm', num_layers: int = 3, hidden_state_dim: int = 512, dropout_p: float = 0.3, rnn_type: str = 'lstm', max_length: int = 128, teacher_forcing_ratio: float = 1.0, optimizer: str = 'adam')[source]¶
This is the configuration class to store the configuration of a LSTMLanguageModel. It is used to initiate an LSTMLanguageModel model. Configuration objects inherit from OpenspeechDataclass (openspeech.dataclass.configs.OpenspeechDataclass).

Parameters:
- model_name (str) – Model name (default: lstm_lm) 
- num_layers (int) – The number of lstm layers. (default: 3) 
- hidden_state_dim (int) – The hidden state dimension of model. (default: 512) 
- dropout_p (float) – The dropout probability of the model. (default: 0.3)
- rnn_type (str) – Type of RNN cell (one of: rnn, lstm, gru). (default: lstm)
- max_length (int) – Max decoding length. (default: 128) 
- teacher_forcing_ratio (float) – The ratio of teacher forcing. (default: 1.0) 
- optimizer (str) – Optimizer for training. (default: adam)
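The teacher_forcing_ratio above is a per-step probability: during training, at each decoding step the ground-truth token is fed to the model with this probability, and the model's own previous prediction otherwise. A minimal stdlib-only sketch of that decision (the helper name use_teacher_forcing is an illustrative assumption, not an Openspeech API):

```python
import random


def use_teacher_forcing(teacher_forcing_ratio: float) -> bool:
    # With probability `teacher_forcing_ratio`, feed the ground-truth token
    # at this decoding step; otherwise feed the model's previous prediction.
    return random.random() < teacher_forcing_ratio


# With the default ratio of 1.0, the ground truth is always fed;
# with 0.0, the model always consumes its own predictions.
always = use_teacher_forcing(1.0)
never = use_teacher_forcing(0.0)
```

Since random.random() samples from [0, 1), a ratio of 1.0 makes the check succeed on every step, which matches the default configuration.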